Add cupy backend for GPU support
#1048
base: master
Conversation
We add a CuPy backend (https://docs.cupy.dev/en/latest/index.html) to run the FWL step on the GPU.
Strategy: "direct" FWL OLS residualization instead of the iterative solver. This should help with problems that converge slowly.
So far this is only tested on the CPU via SciPy, where the results match. I cannot test on a GPU, as I'm on a Mac and have no access to a CUDA GPU.
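For concreteness, here is a minimal sketch of what such a direct residualization could look like, assuming the fixed effects arrive as 0-indexed integer codes. The function name `demean_direct`, the small ridge term used to sidestep the dummy-variable collinearity, and the SciPy fallback are illustrative assumptions, not the PR's actual API.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

try:
    import cupy as cp
    import cupyx.scipy.sparse as csp
    import cupyx.scipy.sparse.linalg as cspla
    _HAS_CUPY = True
except ImportError:
    _HAS_CUPY = False


def demean_direct(x, fe, use_gpu=False):
    """Residualize the columns of x on fixed-effect dummies in one direct solve.

    x  : (n, k) float array of variables to demean
    fe : (n, g) integer array of 0-indexed fixed-effect codes
    Hypothetical sketch of the "direct" FWL strategy, not the PR's code.
    """
    if use_gpu and _HAS_CUPY:
        xp, xsp, xla = cp, csp, cspla
        x, fe = cp.asarray(x), cp.asarray(fe)
    else:
        xp, xsp, xla = np, sp, spla

    n = x.shape[0]
    # Build the sparse dummy matrix D = [D_1 | D_2 | ...], one block per fixed effect.
    blocks = []
    for j in range(fe.shape[1]):
        codes = fe[:, j]
        n_levels = int(codes.max()) + 1
        blocks.append(
            xsp.csr_matrix(
                (xp.ones(n), (xp.arange(n), codes)), shape=(n, n_levels)
            )
        )
    D = xsp.hstack(blocks, format="csr")

    # Normal equations with a tiny ridge so the (collinear) dummy blocks stay solvable.
    DT = D.T.tocsr()
    A = (DT @ D + 1e-8 * xsp.identity(D.shape[1], format="csr")).tocsr()

    resid = xp.empty_like(x, dtype=xp.float64)
    for k in range(x.shape[1]):
        beta = xla.spsolve(A, DT @ x[:, k])
        resid[:, k] = x[:, k] - D @ beta
    return resid
```

With `use_gpu=False` the same code path runs through SciPy on the CPU, which mirrors how the results were validated here.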
Note: we could build backends with a similar strategy via JAX and torch.
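As a rough indication of how the same direct strategy could carry over to JAX (the function name and the dense dummy encoding are assumptions for illustration only):

```python
import jax.numpy as jnp

def demean_direct_jax(x, dummies):
    # Regress each column of x on a dense dummy matrix and keep the residuals.
    # A production backend would want a sparse or segment-sum formulation instead.
    beta, *_ = jnp.linalg.lstsq(dummies, x, rcond=None)
    return x - dummies @ beta
```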